Author Response to Reviews

Neural Information Processing Systems

Thank you for taking the time to read the paper, and for the positive feedback! Below are responses to each reviewer. With additional runs, we expect this distinction to become even clearer; this is the core of the transfer-learning question and a central part of our paper.


Author Response

Neural Information Processing Systems

We thank the reviewers for their valuable feedback and address the comments and concerns as follows. MMAML does not use more data, and it does not make this assumption. We will clarify all of these points in the revised paper.


Author Response: Stochastic Bandits with Context Distributions

Neural Information Processing Systems

First, we would like to thank all reviewers for their valuable feedback; we address the concerns raised below. Thank you for pointing out the reference (Lamprier et al., 2018). Lamprier et al. (2018) assume a fixed context distribution, so our setting specializes to theirs when the context distribution is held fixed. On a technical level, Lamprier et al. (2018, Proposition 5) controls a related deviation term. The more general setting leads to non-trivial regret for some algorithms. We will add a discussion of this related work to the updated version of our paper.
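The setting above, where the learner observes a context distribution rather than the realized context each round, admits a simple LinUCB-style sketch that acts on expected feature vectors. This is an illustrative reading of the setting, not the paper's algorithm; the dimensions, noise levels, and confidence width below are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_actions, T = 3, 4, 300
theta = rng.normal(size=d)      # unknown reward parameter (simulation only)

V = np.eye(d)                   # regularized Gram matrix
b = np.zeros(d)
beta = 2.0                      # confidence width (illustrative, not tuned)
total_reward = 0.0

for t in range(T):
    # The learner only sees a *distribution* over contexts each round:
    # here, per-action features are Gaussian with a known mean.
    means = rng.normal(size=(n_actions, d))
    psi = means                 # expected feature vector per action

    theta_hat = np.linalg.solve(V, b)
    Vinv = np.linalg.inv(V)
    # UCB score: estimated reward plus exploration bonus per action.
    ucb = psi @ theta_hat + beta * np.sqrt(np.einsum('ad,dc,ac->a', psi, Vinv, psi))
    a = int(np.argmax(ucb))

    # Environment draws the realized context and reward (never shown to learner).
    x = means[a] + 0.1 * rng.normal(size=d)
    r = x @ theta + 0.01 * rng.normal()

    # Update the regression using the expected features only.
    V += np.outer(psi[a], psi[a])
    b += psi[a] * r
    total_reward += r
```

Acting on the expected features keeps the update well defined even though the realized context is never observed, which is exactly what makes the fixed-distribution case a special case.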


Uncertainty on Asynchronous Event Prediction: Author Response

Neural Information Processing Systems

In particular, high variance is penalized by the UCE loss, which is particularly important during training. Results for the Dir model are shown in Figure 1. The computational cost scales with the size of the RNN's hidden state, which is used to evaluate the Gaussian function and, for the GP model, the kernel function. We will add these discussions to the paper and extend the related work section based on your feedback.
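Assuming UCE denotes the uncertain (expected) cross-entropy under the predicted Dirichlet, the variance penalty can be seen directly: two Dirichlets with the same mean but different concentrations give different expected losses. A minimal Monte Carlo sketch (the function name and concentration values are ours):

```python
import numpy as np

def uce_mc(alpha, label, n=200_000, seed=0):
    """Monte Carlo estimate of E_{p ~ Dir(alpha)}[-log p_label]."""
    rng = np.random.default_rng(seed)
    p = rng.dirichlet(alpha, size=n)
    return float(np.mean(-np.log(p[:, label])))

# Two Dirichlets with the *same mean* but different total concentration:
sharp   = uce_mc(np.array([20.0, 2.0, 2.0]), 0)   # low variance
diffuse = uce_mc(np.array([5.0, 0.5, 0.5]), 0)    # high variance
```

In closed form the expected loss is ψ(α₀) − ψ(α_c), so the diffuse Dirichlet (α₀ = 6, loss exactly 1/5) is penalized more than the sharp one (α₀ = 24) despite identical predictive means.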


Author Response for "Generating Diverse High-Fidelity Images with VQ-VAE-2"

Neural Information Processing Systems

We thank the reviewers for the detailed and constructive feedback. R2 had positive remarks about the significance of our method and the thoroughness of our evaluation, and we believe these clarifications will resolve all reviewers' concerns. R1 - Interpolations: there is indeed no simple way to do interpolations, which we will clarify in the final version. We also implemented incremental sampling (as in Paine et al., arxiv.org/abs/1611.09482) and an average-based version of the codebook update which does not use stop-gradients.
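For context, the average-based variant mentioned above is presumably the exponential-moving-average codebook update from the VQ-VAE line of work, which sidesteps the stop-gradient codebook loss by tracking running averages of the encoder outputs assigned to each code. A minimal numpy sketch under that assumption (sizes, decay, and the toy data are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, gamma = 4, 2, 0.99           # codebook size, code dim, EMA decay

codebook = rng.normal(size=(K, D))
ema_count = np.ones(K)             # running count of assignments per code
ema_sum = codebook.copy()          # running sum of assigned encoder outputs

for step in range(200):
    z_e = rng.normal(loc=2.0, size=(64, D))    # batch of "encoder outputs"
    # Nearest-neighbour assignment of each z_e to a codebook entry.
    d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    onehot = np.eye(K)[assign]                 # (64, K)

    # EMA codebook update: no gradient flows into the codebook,
    # so no stop-gradient term is needed for it in the loss.
    ema_count = gamma * ema_count + (1 - gamma) * onehot.sum(0)
    ema_sum = gamma * ema_sum + (1 - gamma) * onehot.T @ z_e
    codebook = ema_sum / ema_count[:, None]
```

Codes that receive assignments drift toward the mean of their cluster of encoder outputs, which is the same fixed point the stop-gradient codebook loss targets.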


Author Response: We thank all our reviewers for their careful evaluation and numerous suggestions

Neural Information Processing Systems

We thank all our reviewers for their careful evaluation and numerous suggestions. We agree that evaluating only on in-domain sentences is a limitation. Recoverability estimates are nearly perfect for both the large and medium models. Our results and conclusions hold only for English, which is another limitation of the work; we will add this to the paper.


Author Response for ' Shaping Belief States with Generative Environment Models for RL '

Neural Information Processing Systems

We are grateful for all the constructive and actionable feedback provided by the reviewers, and we believe we have addressed their key concerns below. We are working to improve our explanations in Section 2.2 based on all feedback. We emphasize that careful empirical experimentation in ML can also bring valuable insights to the community, and studying these factors requires an intersectional empirical study such as this paper. Probabilistic models benefit more from overshoot than deterministic models.


Author Responses for "Learning Erdős–Rényi Random Graphs via Edge Detecting Queries"

Neural Information Processing Systems

Regarding the minor clarity issues, we will adjust Figure 1 according to these suggestions and fix the typos noted. If we understand correctly, the reviewer's main concern is that the numerical results are not comprehensive. We compared COMP/DD/SSS/LP experimentally because these all use the same (i.i.d.) test matrix. As for Reviewer 3's suggestions, we believe this material would belong in the supplementary material rather than the main body.


Author response for "An Analysis of SVD for Deep Rotation Estimation", Paper ID 11801

Neural Information Processing Systems

There is ample evidence that our contribution would be received as surprising by the research community. Deep learning research for vision/robotics applications is judged on its outputs, and the fact that domain experts do not consider SVD (L40, [19, 4, 30, 24]) indicates our results would be surprising. This is supported by other reviewers, e.g. "given the surprising and comprehensive empirical [results]". One reviewer asked whether the continuity described in Section 3.4 is the same type of 'global right-inverse' continuity described in [47].
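For readers unfamiliar with the construction under discussion, projecting a network's unconstrained 3×3 output onto SO(3) via SVD is the special orthogonal Procrustes solution. A minimal numpy sketch (the helper name is ours):

```python
import numpy as np

def svd_to_so3(M):
    """Project a 3x3 matrix onto SO(3).

    R = U diag(1, 1, det(U V^T)) V^T is the rotation matrix closest to M
    in Frobenius norm; the det term flips the last singular direction
    when needed so that det(R) = +1 (a rotation, not a reflection).
    """
    U, _, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# A noisy, unconstrained output (here: a random matrix) maps to a valid rotation.
rng = np.random.default_rng(0)
R = svd_to_so3(rng.normal(size=(3, 3)))
```

The output is orthogonal with determinant +1 regardless of the input, which is what makes the layer usable as a drop-in head for rotation regression.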